Verge.io unveils shared, virtualized GPU computing


Verge.io, the company with a simpler way to virtualize data centers, has added new features to its Verge-OS software that let users share GPUs as virtualized resources. This creates a cost-effective, simple, and flexible way to run GPU-based machine learning, remote desktop, and other compute-intensive workloads within an agile, scalable, secure Verge-OS virtual data center.

Verge-OS abstracts compute, network, and storage from commodity servers into pools of raw resources that are simple to run and manage, creating feature-rich infrastructure for a range of environments and workloads: clustered HPC in universities; ultra-converged and hyper-converged enterprises; DevOps and test/dev; compliant medical and healthcare; remote and edge computing, including VDI; and xSPs offering hosted services such as private clouds.

Current methods for deploying GPUs system-wide are complex and expensive, especially for remote users. With Verge-OS, users and administrators can pass an installed GPU through to a virtual data center simply by creating a virtual machine with access to that GPU and its resources.
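The article does not describe Verge-OS's internal mechanism, but GPU passthrough of this general kind is commonly implemented on KVM-based hypervisors via VFIO, where a host PCI device is handed directly to a guest VM. As an illustrative sketch only (the PCI address `0000:01:00.0` is hypothetical, and Verge-OS may work differently), a libvirt domain definition assigns a GPU to a virtual machine with a `hostdev` entry like this:

```xml
<!-- Illustrative libvirt device entry for PCI passthrough of a GPU.
     The address below is a placeholder; the real values come from the
     host's PCI topology (e.g. as reported by lspci). -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- Host PCI address of the GPU being passed through -->
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

With `managed='yes'`, libvirt detaches the device from its host driver and binds it to VFIO when the guest starts, then restores it on shutdown; the guest then sees the GPU as a native PCI device and loads the vendor driver directly.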